Abstract:
In this paper, we propose a new scheme to generate region-of-interest (ROI) enhanced depth maps by combining one low-resolution depth camera with high-resolution stereoscopic cameras. The hybrid camera system produces four synchronized images at each frame: left and right images from the stereoscopic cameras, and a color image and its associated depth map from the depth camera. After estimating initial depth information for the left image using a stereo matching algorithm, we project the depths obtained from the depth camera onto the ROI of the left image using three-dimensional (3-D) image warping. The warped depths are then linearly interpolated to fill the depth holes occurring in the ROI. Finally, we merge the ROI depths with background depths extracted from the initial depth information to generate the ROI enhanced depth map. Experimental results show that the proposed depth acquisition system provides more accurate depth information for the ROI than previous stereo matching algorithms. Moreover, the proposed scheme mitigates inherent problems of current depth cameras, such as their limited measuring distance and low-resolution depth maps.
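The hole-filling step can be sketched as follows; `fill_depth_holes` is an illustrative name, and row-wise 1-D interpolation is an assumption standing in for whatever interpolation scheme the paper actually uses:

```python
import numpy as np

def fill_depth_holes(roi_depth, hole_value=0.0):
    """Fill holes (invalid pixels) in a warped ROI depth map by linear
    interpolation along each row -- a 1-D stand-in for the paper's step.
    Pixels equal to `hole_value` are treated as holes."""
    filled = roi_depth.astype(float).copy()
    for row in filled:                      # each row is a view into `filled`
        valid = row != hole_value
        if valid.sum() < 2:
            continue                        # too few samples to interpolate
        x = np.flatnonzero(valid)
        # np.interp clamps at the ends, so boundary holes copy the nearest depth
        row[~valid] = np.interp(np.flatnonzero(~valid), x, row[valid])
    return filled
```

Note that `np.interp` clamps rather than extrapolates, so holes at the ROI border simply repeat the nearest valid depth.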
Abstract:
We present a novel depth-from-focus technique. Following prior work, our pipeline starts with a focal stack and an estimate of the amount of defocus as given by, for instance, the ring difference filter. To improve robustness to outliers while avoiding reliance on costly nonlinear optimizations, we propose an original scheme that linearly scans the profile over a fixed-size window, searching for the best peak within each window using a linearized least-squares Laplace regression. As a post-process, depth estimates with low confidence are reconstructed through an adaptive moving least squares filter. We show how to objectively evaluate the performance of our approach by generating synthetic focal stacks from which the reconstructed depth maps can be compared to ground truth. Our results show that our method achieves higher accuracy than a previous nonlinear Laplace regression technique, while being orders of magnitude faster.
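A minimal sketch of the per-pixel peak search, assuming a closed-form three-point parabola fit stands in for the paper's linearized least-squares Laplace regression (the function name and the simplified window handling are illustrative):

```python
import numpy as np

def best_peak(profile):
    """Locate the focus peak in a per-pixel defocus profile: take the
    coarse maximum, then refine its position with a closed-form (linear
    least-squares) parabola fit over three samples. The real method fits a
    Laplace model over fixed-size windows; this is a simplified stand-in."""
    profile = np.asarray(profile, dtype=float)
    i = int(np.argmax(profile))                 # coarse peak index
    if 0 < i < len(profile) - 1:
        a, b, c = profile[i - 1], profile[i], profile[i + 1]
        denom = a - 2.0 * b + c
        if denom != 0.0:
            return i + 0.5 * (a - c) / denom    # sub-sample peak location
    return float(i)
```

The refinement is linear in the unknowns, which is what makes this family of methods orders of magnitude faster than iterative nonlinear fits.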
Abstract:
This paper presents a method for increasing spatial resolution of a depth map using its corresponding high-resolution (HR) color image as a guide. Most of the previous methods rely on the assumption that depth discontinuities are highly correlated with color boundaries, leading to artifacts in the regions where the assumption is broken. To prevent scene texture from being erroneously transferred to reconstructed scene surfaces, we propose a framework for dividing the color image into different regions and applying different methods tailored to each region type. For the region classification, we first segment the low-resolution (LR) depth map into regions of smooth surfaces, and then use them to guide the segmentation of the color image. Using the consensus of multiple image segmentations obtained by different super-pixel generation methods, the color image is divided into continuous and discontinuous regions: in the continuous regions, their HR depth values are interpolated from LR depth samples without exploiting the color information. In the discontinuous regions, their HR depth values are estimated by sequentially applying more complicated depth-histogram-based methods. Through experiments, we show that each step of our method improves depth map upsampling both quantitatively and qualitatively. We also show that our method can be extended to handle real data with occluded regions caused by the displacement between color and depth sensors.
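The consensus classification step might be sketched as below; `consensus_boundaries` is a hypothetical name, and marking a pixel as discontinuous only when every segmentation places a boundary there is one assumed reading of the consensus idea:

```python
import numpy as np

def consensus_boundaries(label_maps):
    """Given several super-pixel label maps of the same image (one per
    segmentation method), mark a pixel as a 'discontinuous' candidate only
    when ALL maps have a label change to its right or bottom neighbour."""
    h, w = label_maps[0].shape
    consensus = np.ones((h, w), dtype=bool)
    for lab in label_maps:
        b = np.zeros((h, w), dtype=bool)
        b[:, :-1] |= lab[:, :-1] != lab[:, 1:]   # boundary to the right
        b[:-1, :] |= lab[:-1, :] != lab[1:, :]   # boundary below
        consensus &= b                           # require agreement
    return consensus
```

Pixels outside the returned mask would then be treated as continuous and interpolated from LR depth samples alone, as the abstract describes.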
Abstract:
Depth maps have been used in many vision tasks owing to the real-time acquisition and low cost of consumer depth cameras. However, they still suffer from low precision and severe sensor noise, despite significant research on depth enhancement. We propose a novel multi-level feature fusion convolutional neural network (CNN) for facial depth map refinement, named MFFNet. It is a multi-stage network in which each stage is a local multi-level feature fusion (LMLF) block. To smooth the noise as well as restore detailed facial structure, a hierarchical fusion strategy is adopted to fully fuse multi-level features: an LMLF block fuses multi-level features locally within each stage, while inter-stage skip connections achieve a global multi-level feature fusion. Moreover, the inter-stage skip connections also ease training by shortening the information propagation paths. We introduce an effective data augmentation method to synthesize noisy facial depth maps in various poses; training with these synthetic data improves the robustness of the proposed method to face poses. The proposed method is evaluated on a synthetic facial depth map dataset, a real Kinect V2 facial depth map dataset, and the Middlebury Stereo Dataset. Experimental results show that our method produces refined depth maps of high quality and outperforms several state-of-the-art methods.
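A purely structural (not learned) sketch of the fusion idea, with plain averaging assumed in place of learned convolutional fusion; the function name and fixed weights are illustrative only:

```python
import numpy as np

def lmlf_block(features):
    """Toy local multi-level feature fusion: given feature maps from several
    levels (same spatial size), fuse them hierarchically in pairs, then add a
    skip connection from the first level. The real LMLF block learns its
    fusion weights; averaging here only mirrors the data flow."""
    fused = features[0]
    for f in features[1:]:
        fused = 0.5 * (fused + f)   # stand-in for a learned 1x1 fusion layer
    return fused + features[0]      # skip connection shortens gradient paths
```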
Abstract:
Let D be a dendrite with a unique branch point and f : D -> D be continuous. Denote by R(f) and Omega(f) the set of recurrent points and the set of non-wandering points of f, respectively. Let Omega_0(f) = D and Omega_k(f) = Omega(f restricted to Omega_{k-1}(f)) for any positive integer k. The minimal k such that Omega_k(f) = Omega_{k+1}(f) is called the depth of f, where k is a positive integer or infinity. In this note, we show that Omega_2(f) equals the closure of R(f) and the depth of f is at most 2.
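The iterated non-wandering sets in the abstract read more cleanly in standard notation (a restatement of the definitions above, not new material):

```latex
\Omega_0(f) = D, \qquad
\Omega_k(f) = \Omega\bigl(f|_{\Omega_{k-1}(f)}\bigr) \quad (k \ge 1), \qquad
\operatorname{depth}(f) = \min\{\, k : \Omega_k(f) = \Omega_{k+1}(f) \,\},
```

and the main result is $\Omega_2(f) = \overline{R(f)}$, whence $\operatorname{depth}(f) \le 2$.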
Abstract:
Let D be a dendrite with finitely many branch points and f : D -> D be continuous. Denote by R(f) and Omega(f) the set of recurrent points and the set of non-wandering points of f, respectively. Let Omega_0(f) = D and Omega_n(f) = Omega(f restricted to Omega_{n-1}(f)) for all n in N. The minimal m in N ∪ {infinity} such that Omega_m(f) = Omega_{m+1}(f) is called the depth of f. In this note, we show that Omega_3(f) equals the closure of R(f) and the depth of f is at most 3. Furthermore, we show that there exist a dendrite D with finitely many branch points and a map f in C^0(D) such that Omega_3(f) equals the closure of R(f) but differs from Omega_2(f).
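In the same notation, the result can be summarized as the chain (restated from the abstract):

```latex
\overline{R(f)} \;=\; \Omega_3(f) \;\subseteq\; \Omega_2(f) \;\subseteq\; \Omega_1(f) \;=\; \Omega(f),
```

where the first inclusion is strict for a suitable dendrite with finitely many branch points, so the bound $\operatorname{depth}(f) \le 3$ is attained.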
Abstract:
Displacement mapping has been widely used to add geometric surface details to 3D mesh models. However, it requires sufficient tessellation of the mesh if fine details are to be represented. In this paper, we propose a method for applying displacement mapping even to coarse models by using an augmented patch mesh. The patch mesh is a regularly tessellated flat square mesh, which is mapped onto the target area. Our method applies displacement mapping to the patch mesh both to fit it to the original mesh and to add surface details. We generate a patch map, which stores three-dimensional displacements from the patch mesh to the original mesh; a displacement map is also provided to define the new surface feature. The target area in the original mesh is then replaced with the patch mesh: the patch mesh reconstructs the original shape using the patch map, and the new surface detail is added using the displacement map. Our results show that our method conveniently adds surface features to various models. The proposed method is particularly useful when the surface features change dynamically, since the original mesh is preserved and the separate patch mesh overwrites the target area at runtime.
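The per-vertex reconstruction step can be sketched as follows, under the assumption that both maps are sampled at the patch-mesh vertices (all names and shapes are illustrative, not the paper's data layout):

```python
import numpy as np

def apply_patch(grid_vertices, patch_map, disp_map, normals):
    """Reconstruct the target area from a flat patch mesh: move each patch
    vertex by its stored 3-D offset to the original surface (patch map), then
    push it along the surface normal by the new detail height (displacement
    map). Shapes: (N, 3) arrays, except disp_map which is (N,)."""
    base = grid_vertices + patch_map              # recover the original shape
    return base + disp_map[:, None] * normals     # add the new surface detail
```

Because the original mesh is untouched, swapping `disp_map` at runtime changes the surface feature without re-tessellating the model.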
Abstract:
Existing depth map super-resolution (SR) methods cannot achieve satisfactory results in restoring depth map details. For example, boundaries of the depth map are difficult to reconstruct effectively from the low-resolution (LR) depth map, particularly at large magnification factors. In this paper, we present a novel super-resolution method for single depth maps by introducing a deep feedback network (DFN), which effectively enhances feature representations at depth boundaries through iterative up-sampling and down-sampling operations, building a deep feedback mechanism that projects high-resolution (HR) representations to the low-resolution spatial domain and then back-projects them to the high-resolution spatial domain. The deep feedback (DF) block iteratively imitates the process of image degradation and reconstruction. The rich intermediate high-resolution features effectively tackle the problem of depth boundary ambiguity in depth map super-resolution. Extensive experimental results on benchmark datasets show that our proposed DFN outperforms state-of-the-art methods.
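One feedback iteration can be sketched as below, with nearest-neighbour resampling assumed in place of the network's learned up/down-sampling layers (a data-flow sketch, not the DFN architecture itself):

```python
import numpy as np

def back_projection_step(hr, lr_target, scale=2):
    """One up/down-sampling feedback step: down-sample the current HR
    estimate to imitate degradation, compare with the LR input, and project
    the residual back to the HR domain as a correction."""
    down = hr[::scale, ::scale]                       # simulated degradation
    residual = lr_target - down                       # LR-domain error
    up = np.kron(residual, np.ones((scale, scale)))   # back-project the error
    return hr + up
```

Iterating this step drives the down-sampled HR estimate toward the LR input, which is the intuition behind the feedback mechanism described above.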
Abstract:
Depth Image Based Rendering (DIBR) is a technique that generates virtual views for multiview video applications from 3D video represented in the color-plus-depth format. The depth map is not viewed by end users; however, it helps to generate the different views required by the application. Therefore, the depth maps need to be compressed in a way that minimizes distortions in the rendered views. By doing so, it is possible to generate high-quality virtual views from compressed depth maps. This paper presents two mode selection techniques based on genetic algorithms for encoding depth maps. In the proposed techniques, the encoding modes are chosen so that the distortion in the rendered views is minimized. Simulation results illustrate that the proposed techniques improve the objective quality of the rendered virtual views by up to 2 dB over a Lagrangian-optimization-based mode selection technique that considers distortions only in the depth map.
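A toy version of genetic-algorithm mode selection, assuming a precomputed per-block, per-mode rendered-view distortion table (`costs` and all GA parameters are hypothetical, not the paper's encoder configuration):

```python
import random

def ga_mode_selection(costs, pop=20, gens=50, seed=0):
    """Search for the per-block mode vector minimising total rendered-view
    distortion. costs[b][m] is the distortion of block b under mode m;
    requires at least two blocks for the one-point crossover below."""
    rng = random.Random(seed)
    n, m = len(costs), len(costs[0])
    def fitness(ind):
        return sum(costs[b][ind[b]] for b in range(n))
    popu = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=fitness)
        survivors = popu[: pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:            # occasional mutation
                child[rng.randrange(n)] = rng.randrange(m)
            children.append(child)
        popu = survivors + children
    return min(popu, key=fitness)
```

Elitism guarantees the best mode vector found so far is never lost between generations, which keeps the search monotone in the objective.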
Abstract:
Multiview video plus depth (MVD) is a new 3D video format developed by MPEG to support 3D applications. The format combines texture video with associated depth maps. Consequently, efficient transmission of 3D video signals requires compression of both the texture video and the depth maps. Since the high computational complexity of jointly coding the texture video and depth maps remains an open problem, this paper introduces a low-complexity MVD coding algorithm that adaptively exploits the correlation between the texture video and the depth maps. Based on this correlation, we propose four efficient techniques: depth-information-based fast mode size decision, adaptive disparity estimation in texture coding, motion vector sharing based on texture image similarity, and SKIP mode decision in depth coding. Experimental results show that the proposed algorithm significantly reduces the computational complexity of MVD coding while improving the coding performance and achieving better rendering quality.
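The depth-information-based fast mode size decision might look like this in outline; the flatness threshold and candidate mode lists are assumptions for illustration, not the paper's actual values:

```python
import numpy as np

def fast_mode_size(depth_block, flat_thresh=1.0):
    """Prune the partition-mode search using depth information: a nearly flat
    depth block tends to co-locate with homogeneous texture, so large
    partitions (or SKIP) are tried and small ones are skipped, cutting the
    encoder's mode-decision cost."""
    if float(np.var(depth_block)) < flat_thresh:
        return ["SKIP", "16x16"]                  # flat region: prune early
    return ["16x16", "16x8", "8x16", "8x8"]       # depth edge: full search
```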